In this project, you'll use generative adversarial networks to generate new images of faces.
You'll be using two datasets in this project: MNIST and CelebA.
Since the CelebA dataset is complex and this is your first project with GANs, we want you to test your neural network on MNIST before CelebA. Running the GAN on MNIST will let you see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
The CelebFaces Attributes (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first few examples by changing show_n_images.
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Since the project's main focus is on building the GAN, we'll preprocess the data for you. The MNIST and CelebA images will be 28x28, with pixel values in the range of -0.5 to 0.5. The CelebA images are cropped to remove the parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black-and-white with a single color channel, while the CelebA images have three color channels (RGB).
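As a quick sanity check, you can inspect one preprocessed batch. This is just a sketch, assuming the helper.Dataset interface that the training cells below use; the values should land in the stated range:
mnist_check = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
check_batch = next(mnist_check.get_batches(16))
print(check_batch.shape) # e.g. (16, 28, 28, 1) for single-channel MNIST
print(check_batch.min(), check_batch.max()) # both should fall within [-0.5, 0.5]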
You'll build the components necessary for a GAN by implementing the following functions:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
This will check to make sure you have the correct version of TensorFlow and access to a GPU.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder using image_width, image_height, and image_channels.
- Z input placeholder using z_dim.
- Learning rate placeholder.
Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
"""
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
"""
# TODO: Implement Function
input_real = tf.placeholder(tf.float32, [None, image_width, image_height, image_channels], name="input_real")
input_z = tf.placeholder(tf.float32, [None, z_dim], name="input_z")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
# Unlike variables in other languages, a Variable node is best understood as a value that TensorFlow itself updates at run time (it is changed during training).
# tf.placeholder() is mainly used for input data (it supplies the actual training examples); no initial value is required (the data is not changed when fed to the model).
# tf.Variable() is mainly used to store state (learnable values such as weights and biases); an initial value must be given (the data is learned).
# http://stackoverflow.com/questions/36693740/whats-the-difference-between-tf-placeholder-and-tf-variable
return input_real, input_z, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
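To make the placeholder/Variable distinction in the comments above concrete, here is a minimal throwaway sketch (not part of the project code):
with tf.Graph().as_default(), tf.Session() as demo_sess:
    fed = tf.placeholder(tf.float32, name="fed_each_run") # supplied via feed_dict at run time
    state = tf.Variable(0.5, name="learned_state") # holds state and needs an initial value
    demo_sess.run(tf.global_variables_initializer())
    print(demo_sess.run(fed + state, feed_dict={fed: 1.0})) # 1.5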
def leaky_relu(x, alpha=0.01): # x is a tensor
return tf.maximum(alpha * x, x)
# For negative inputs, Leaky ReLU returns a small value proportional to the input's magnitude instead of 0.
# This version of TensorFlow has no built-in leaky ReLU, so we define it ourselves: alpha * x for negative inputs, x for positive inputs.
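A quick numeric check of the activation, run in a throwaway session:
with tf.Graph().as_default(), tf.Session() as demo_sess:
    print(demo_sess.run(leaky_relu(tf.constant([-2.0, -0.5, 0.0, 1.0]))))
    # [-0.02 -0.005 0. 1.] : alpha * x for negative inputs, x otherwise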
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
def discriminator(images, reuse=False):
"""
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
"""
# TODO: Implement Function
with tf.variable_scope("discriminator", reuse=reuse): #공유 변수
#tf.variable_scope : 공유 변수. 공유 변수를 사용하지 않으면 학습된 변수가 아닌 계속해서 새로운 변수를 생성하게 된다.
#tf.variable_scope에 reuse 키워드를 사용하여 그래프를 다시 작성하는 경우 새 변수를 작성하는 대신 변수를 재사용하도록 지시 할 수 있다.
# Input layer is 28x28x3 : 28x28 dimensional images. RGB
x1 = tf.layers.conv2d(images, 64, 5, strides=2, kernel_initializer=tf.contrib.layers.xavier_initializer(), padding='same') #CNN
relu1 = leaky_relu(x1)
# 14x14x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, kernel_initializer=tf.contrib.layers.xavier_initializer(), padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = leaky_relu(bn2)
# 7x7x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, kernel_initializer=tf.contrib.layers.xavier_initializer(), padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = leaky_relu(bn3)
# 4x4x256
# tf.layers.batch_normalization: normalizes the input batch; the optional training flag is a bool saying whether we are training or testing.
# Batch normalization is used to improve GAN results; it also lets the discriminator effectively look at several samples at once.
# Batch normalization normalizes each layer's input to keep its distribution stable.
# https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization
# tf.layers.conv2d: creates a convolutional layer; the parameters are (inputs, filters, kernel_size).
# https://www.tensorflow.org/api_docs/python/tf/layers/conv2d
# Initializing the kernels with kernel_initializer=tf.contrib.layers.xavier_initializer() tends to give more accurate results.
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256)) # flatten for the fully connected layer
logits = tf.layers.dense(flat, 1) # fully connected output
out = tf.sigmoid(logits) # sigmoid activation
# tf.layers.dense: functional interface for the densely-connected layer.
# tf.layers.dense makes it easy to build a layer: the parameters are (inputs, units); everything else is optional.
# https://www.tensorflow.org/api_docs/python/tf/layers/dense
# Similar to the discriminator built for MNIST before, but deeper and with batch normalization.
return out, logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
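As a minimal shape check (a sketch on a fresh graph), the layer sizes follow the comments above: 28x28 -> 14x14 -> 7x7 -> 4x4 -> a single logit per image:
with tf.Graph().as_default():
    check_images = tf.placeholder(tf.float32, [None, 28, 28, 3])
    check_out, check_logits = discriminator(check_images)
    print(check_out.get_shape()) # (?, 1) sigmoid probability per image
    print(check_logits.get_shape()) # (?, 1) raw logits, consumed by the loss later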
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
def generator(z, out_channel_dim, is_train=True):
"""
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
"""
# TODO: Implement Function
with tf.variable_scope("generator", reuse= not is_train): #공유 변수
#tf.variable_scope : 공유 변수. 공유 변수를 사용하지 않으면 학습된 변수가 아닌 계속해서 새로운 변수를 생성하게 된다.
#tf.variable_scope에 reuse 키워드를 사용하여 그래프를 다시 작성하는 경우 새 변수를 작성하는 대신 변수를 재사용하도록 지시 할 수 있다.
# First fully connected layer
x1 = tf.layers.dense(z, 7*7*512) # input: the noise vector z; output size: 7*7*512 (fully connected)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 7, 7, 512)) # reshape to fit the convolutional input
x1 = tf.layers.batch_normalization(x1, training=is_train) # batch normalization
x1 = leaky_relu(x1) # leaky ReLU activation
# 7x7x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, kernel_initializer=tf.contrib.layers.xavier_initializer(), padding='same')
#inputs, filters, kernel_size
x2 = tf.layers.batch_normalization(x2, training=is_train)
x2 = leaky_relu(x2)
# 14x14x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, kernel_initializer=tf.contrib.layers.xavier_initializer(), padding='same')
x3 = tf.layers.batch_normalization(x3, training=is_train)
x3 = leaky_relu(x3)
# 28x28x128 now
# tf.layers.dense and tf.layers.batch_normalization: see the notes in the discriminator above.
# tf.layers.conv2d_transpose: creates a transposed convolution (the reverse of a regular convolution); the parameters are (inputs, filters, kernel_size).
# https://www.tensorflow.org/api_docs/python/tf/layers/conv2d_transpose
# Initializing the kernels with kernel_initializer=tf.contrib.layers.xavier_initializer() tends to give more accurate results.
# Output layer
logits = tf.layers.conv2d_transpose(x3, out_channel_dim, 5, strides=1, kernel_initializer=tf.contrib.layers.xavier_initializer(), padding='same') # output transposed convolution
# 28x28xout_channel_dim now
out = tf.tanh(logits) # final activation is tanh; output matches the 28x28 image size
# tanh returns values between -1 and 1 (cf. sigmoid: 0 to 1)
# Similar to the generator built for MNIST before, but deeper and with batch normalization.
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
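A matching shape check for the generator (a sketch; the z_dim of 100 here is an arbitrary choice):
with tf.Graph().as_default():
    check_z = tf.placeholder(tf.float32, [None, 100])
    check_g = generator(check_z, out_channel_dim=3)
    print(check_g.get_shape()) # (?, 28, 28, 3), values in (-1, 1) from tanh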
Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
def model_loss(input_real, input_z, out_channel_dim):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
# TODO: Implement Function
smooth = 0.1
g_model = generator(input_z, out_channel_dim, is_train=True) # generated images
d_model_real, d_logits_real = discriminator(input_real, reuse=False) # real images
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True) # generated images
d_loss_real = tf.reduce_mean( # discriminator loss on real images
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
# tf.ones_like is used because real images should be classified as 1: we want the discriminator to output 1.
# To help the discriminator generalize, the smooth parameter shrinks the labels slightly from 1.0 to 0.9.
# This is known as label smoothing and is commonly used with the discriminator to improve performance.
d_loss_fake = tf.reduce_mean( # discriminator loss on generated images
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
# tf.zeros_like is used because generated images should be classified as 0 (calling the generator's output fake is the discriminator's goal).
# Similar to d_loss_real, but without label smoothing.
g_loss = tf.reduce_mean( # generator loss
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
# tf.ones_like is used because the generator wants its output classified as 1 (fooling the discriminator is the generator's goal).
d_loss = d_loss_real + d_loss_fake # the discriminator's total loss is the sum of its real-image and generated-image losses
return d_loss, g_loss
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
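To see what the one-sided label smoothing changes, here is a small sketch comparing the real-image loss with and without the 0.9 target for one confident logit:
with tf.Graph().as_default(), tf.Session() as demo_sess:
    logit = tf.constant([[2.0]])
    smoothed = tf.nn.sigmoid_cross_entropy_with_logits(logits=logit, labels=tf.ones_like(logit) * 0.9)
    plain = tf.nn.sigmoid_cross_entropy_with_logits(logits=logit, labels=tf.ones_like(logit))
    print(demo_sess.run([smoothed, plain])) # ~0.33 vs ~0.13: smoothing keeps some pressure on the discriminator so it doesn't grow overconfident on real images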
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# TODO: Implement Function
# Get weights and bias to update
t_vars = tf.trainable_variables() # fetch the trainable variables
# returns every variable created with trainable=True
# https://www.tensorflow.org/api_docs/python/tf/trainable_variables
d_vars = [var for var in t_vars if var.name.startswith("discriminator")] # variables created in the discriminator scope
g_vars = [var for var in t_vars if var.name.startswith("generator")] # variables created in the generator scope
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): # declare dependencies: ops that must run before anything defined in this context
# The tf.control_dependencies context tells TensorFlow to run the batch normalization update ops
# (the moving mean and variance) before the training ops below are executed.
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
# Optimize with Adam; only the variables passed via var_list are updated.
return d_train_opt, g_train_opt
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
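To confirm that filtering by scope name picks up the right variables, a sketch that builds both networks on a fresh graph and lists the trainable variable names:
with tf.Graph().as_default():
    _ = discriminator(tf.placeholder(tf.float32, [None, 28, 28, 3]))
    _ = generator(tf.placeholder(tf.float32, [None, 100]), 3)
    for var in tf.trainable_variables():
        print(var.name) # every name starts with "discriminator/" or "generator/"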
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
"""
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
"""
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Implement train to build and train the GAN. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use show_generator_output to show generator output while you train. Running show_generator_output for every batch would drastically increase training time and the size of the notebook, so it's recommended to print the generator output every 100 batches.
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
"""
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
"""
# TODO: Build Model
#data_shape[0] : count
#data_shape[1] : width
#data_shape[2] : height
#data_shape[3] : channels
input_real, input_z, lr = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim=z_dim)
d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer()) # initialize all variables
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
# TODO: Train Model
batch_images = batch_images * 2 # rescale from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim)) # the generator turns this random noise into images
# Run optimizers
_ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})
_ = sess.run(g_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})
if steps % 10 == 0: # print losses every 10 steps
train_loss_d = d_loss.eval({input_real: batch_images, input_z: batch_z})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(epoch_i + 1, epoch_count),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
if steps % 100 == 0: # show generated images every 100 steps
show_generator_output(sess, n_images=25, input_z=input_z, out_channel_dim=data_shape[3], image_mode=data_image_mode)
steps += 1
Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the generator's loss is lower than the discriminator's loss, or close to 0.
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.5
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Run your GAN on CelebA. One epoch takes around 20 minutes on an average GPU. You can run the whole epoch or stop when it starts to generate realistic faces.
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.5
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and export it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.